Do you feel like everyone around you is smarter than you, and you are worried that they may discover you don't belong here? Or that you are faking it? Does it feel like you have gotten to your position due to a lucky break and not skill?
Welcome to Impostor Syndrome, a strange psychological phenomenon that makes ordinary people – even brilliant ones – feel like frauds: fake, inadequate, and undeserving. Even incredibly successful people like Neil Armstrong and Emma Watson have felt like this.
Anyone can feel like this, especially in a tech or research career. It's particularly prevalent among people from diverse backgrounds (women in STEM, international students and staff, etc.), and it can affect you at any stage of your career.
Come learn more about it from a fellow Impostor and hear about coping mechanisms to help you sidestep this feeling!
EcoCommons Australia hosts a variety of Advanced Ecological Modelling Notebooks aimed at bridging the gap between platform-based analysis tools and hands-on coding. EcoCommons offers notebooks in both Quarto markdown (.qmd) and Jupyter Notebook (.ipynb) formats, making it easier for users to interact with and customise workflows. This workshop will focus on a notebook which guides users through the process of calculating and visualising Species Richness and Shannon diversity indices, which are commonly used metrics for analysing and statistically comparing ecological surveys.
In this workshop we will cover:
The aim of the workshop is for participants to feel confident running an EcoCommons notebook and to understand the workflow for calculating and visualising Species Richness and Shannon diversity.
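The workshop notebooks themselves are in Quarto/Jupyter formats; as a rough sketch of the underlying maths only (not the workshop's own code), both indices can be computed in a few lines of Python:

```python
import math

def species_richness(counts):
    """Species richness: the number of species with at least one individual recorded."""
    return sum(1 for c in counts if c > 0)

def shannon_diversity(counts):
    """Shannon diversity index H' = -sum(p_i * ln(p_i)),
    where p_i is each species' proportion of total individuals."""
    total = sum(counts)
    proportions = [c / total for c in counts if c > 0]
    return -sum(p * math.log(p) for p in proportions)

# Hypothetical survey: individuals counted per species at one site
site_counts = [10, 10, 10, 10]  # four equally abundant species
print(species_richness(site_counts))              # 4
print(round(shannon_diversity(site_counts), 3))   # ln(4) ≈ 1.386
```

Equal abundances maximise Shannon diversity for a given richness; skewed counts lower it, which is why the two metrics are usually reported together.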
Are you looking to scale up your eResearch workflows to use really big data or compute? Interested in AI but no idea where to start? No worries, I've got your back!
The Pawsey Supercomputing Research Centre is one of two Tier-1 supercomputing facilities in Australia. Pawsey is home to Australia's (and the southern hemisphere's!) most powerful supercomputer, Setonix, but we also provide a range of training, software engineering and research data consulting services to Australian researchers across a wide variety of research domains. We can help get you up and running, or work with you to help optimise your data and compute workflows to make sure you get the most out of your research tools.
In this talk, I'll provide an overview of Pawsey's high-performance computing and research data systems, as well as how we're working with researchers in Queensland to develop and optimise research software across domains like physics, chemistry, and health and life sciences. I'll also discuss the available schemes to get access to Pawsey's systems and services, and round it out with plenty of time for Q&A.
Discover how TERN’s national-scale infrastructure supports cutting-edge ecosystem research. This workshop introduces TERN’s open-access data, tools, and services, which help researchers monitor and understand Australia’s diverse environments. Through case studies and interactive demonstrations, participants will learn how to access and apply TERN resources to advance their work in ecosystem science, environmental management, and beyond. The workshop is ideal for researchers, students, environmental managers, and policymakers interested in advancing their understanding of Australian ecosystems through robust, open-access infrastructure.
Whether new to TERN or looking to deepen your engagement, this session will equip you with the knowledge and tools to harness TERN’s capabilities to support impactful ecosystem science.
In this interactive session, you'll explore how to shape a researcher profile that captures more than just your titles: one that reflects your journey, your purpose, and what makes your work matter. Think less “template,” more you on the page. We’ll share tips for building authentic profiles across digital platforms so you can take what fits and make it your own.
This hands-on workshop builds on our ongoing research project aimed at understanding and demystifying the rapidly evolving ecosystem of research-specific generative AI (RGAI) tools. As generative AI becomes increasingly embedded in academic workflows, it is critical to develop methods for systematically evaluating these tools' capabilities, limitations, and implications.
Building on our inventory of RGAI platforms, the workshop will guide attendees through a structured evaluation process based on the extended app walkthrough method (Light, Burgess & Duguay, 2016). This method enables a critical mapping of each tool’s affordances, constraints, intended applications, and embedded norms.
Through collaborative analysis and discussion, we will explore how these walkthroughs can inform the development of 'report cards', a framework for communicating the technical and ethical dimensions of RGAI tools to broader academic and public audiences. Drawing on recent work (Snoswell et al., 2023; Gilbert et al., 2023), we will also consider how these evaluations can support future research into how generative AI is reshaping scholarly communication, authorship, and the academic publishing process.
By the end of the session, participants will have practical experience in auditing generative AI tools and a deeper understanding of how to critically engage with their integration into academic research and writing.
Clinical data sharing in health and medical research offers significant potential to uncover new discoveries and increase efficiency in the research process, which will overall improve patient outcomes. To support this, the Australian Research Data Commons (ARDC) has built national research infrastructure to enhance the discoverability, access, and reuse of data from health research studies across Australia.
In this workshop, participants will learn how to discover and request clinical trials data using one of ARDC’s key national platforms: Health Data Australia (HDA). The HDA platform is a national catalogue designed to connect researchers with valuable Australian health datasets (researchdata.edu.au/health/).
This workshop will also introduce a practical framework for the secondary use of clinical trials data, developed in collaboration with the Health Studies Australian National Data Asset (HeSANDA) and researchers from the NextGen Evidence Synthesis Team at the NHMRC Clinical Trials Centre, University of Sydney. Its development was also informed by research literature, stakeholder consultations, and expert guidance.
This workshop is designed for researchers, data scientists, and health professionals interested in leveraging secondary clinical trials data for new research. By the end of this workshop, participants will be able to:
eResearch workflows increasingly rely on new computing technologies, such as big data analytics, massively-parallel computer simulations, or large-scale AI training workloads. These technologies promise to vastly increase the scale and kind of research that's possible... if we can write our code to take full advantage of them. Making use of big compute and big data often requires a shift in thinking around how we structure our code, as inefficient software limits how effectively you can use these computing resources. Plus, time spent waiting for code or data pipelines to complete is time not spent doing research.
Squeezing every available bit of performance out of your code can be extremely time-consuming, but fortunately, a little bit of know-how can go a long way. This talk will provide an introduction to performance engineering, with a focus on practical techniques for compute- and data-heavy workflows. I will discuss important trends in research computing hardware and software, as well as how the prominence of high-level languages such as Python and R has changed the research computing landscape. Finally, I'll share some methodologies and open-source tools which will help you get the most 'bang for your buck' when optimising slow code in your research pipelines.
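As a small taste of that measure-before-you-optimise mindset, Python's built-in cProfile module shows where time is actually spent; the function below is a generic, deliberately naive example, not code from the talk:

```python
import cProfile
import io
import pstats

def slow_sum_of_squares(n):
    # Deliberately naive: materialises an intermediate list before summing
    return sum([i * i for i in range(n)])

profiler = cProfile.Profile()
profiler.enable()
result = slow_sum_of_squares(1_000_000)
profiler.disable()

# Report the top entries by cumulative time
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```

Profiling first avoids the classic trap of hand-tuning code that was never the bottleneck in the first place.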
EcoCommons Australia offers a comprehensive suite of resources for ecological modelling, including an intuitive, user-friendly platform featuring thousands of trusted datasets and a range of expert-developed workflows for species distribution and community modelling.
This workshop will begin with a brief introduction to species distribution models (SDMs), followed by a guided tour of the EcoCommons platform. We will cover:
By the end of the workshop, attendees will understand how to run effective SDMs, select fit-for-purpose data, and produce accurate and meaningful results — all within a point-and-click environment.
This workshop introduces Metavaluation, an open-source framework for recognising and rewarding diverse research contributions—from data sharing and teaching to mentoring, organising, and behind-the-scenes work that often goes unseen.
Its key innovation lies in treating peer evaluations as valuable contributions themselves—using a self-referential process that pegs all other valuations to these reviews as a ‘base unit’ of value. This creates a dynamic feedback loop that directly rewards participation while generating transparent, relative value scores across all contributions.
In the first part of the workshop, we’ll explore the development of Metavaluation—from its origins as a radically transparent peer-review model aimed at accelerating Open Science, to its evolution through participatory festivals into a general-purpose framework for community-led evaluation. We’ll share pilot data from commons-oriented communities across science, arts, technology, and environmental sectors—demonstrating how the system flexibly recognises what matters to different communities, while also creating pathways for coordination between them.
In the second part, we’ll use ResBaz Queensland 2025 as a live case study. Participants will nominate and evaluate real contributions to the event—such as talks, mentorship, organising, or community support—via a simple, inclusive peer-review process. Together, we’ll generate a live dataset that maps what this community values most, and explore how this data can support recognition, guide future events, and connect aligned communities across domains.
You’ll leave with:
This workshop introduces the Australian Internet Observatory (AIO). Focusing on data donation approaches and the AIO dashboard, it will showcase how researchers of all skill levels can collect and study data using these tools. We will explore what topics can be investigated, what insights can be gained, and how the AIO can complement or expand existing research methodologies.
Participants will be introduced to the interfaces of digital media tools and their associated ethical considerations and limitations for social science and humanities research. This workshop aims to facilitate the use of digital media tools in research by introducing modern, cutting-edge resources that do not require sophisticated technical skills. These tools can be quickly deployed to explore a variety of research questions, allowing researchers to focus on their inquiries without a steep technical learning curve.
This workshop will include jargon busting around network technologies, along with some hands-on activities. You will learn about networks and diagnostic tools, and where all these things fit in the researcher’s toolkit.
By the end of this workshop, you should be able to:
In this Data Carpentry workshop, learn how to import, work with, and plot vector and raster-format spatial data in R. The workshop also touches on spatial metadata (extent and coordinate reference systems), reprojecting spatial data, and working with raster time series data.
This workshop expects attendees to have used R, dplyr and ggplot2 before.
Concordances are a very useful tool for anyone who works with textual data. Concordancing tools (also referred to as Keyword in Context or KWIC tools) provide us with a listing of all occurrences of a word in a text along with some surrounding context. A concordance is therefore an excellent starting point to explore how a word is used in a text. Most KWIC tools also have the capability to sort results according to the context of the instances, by preceding words or following words or a combination of the two, and these capabilities allow us to extract more specific information from concordances.
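As an illustration of what a KWIC tool produces, a toy concordancer can be sketched in a few lines of Python (the workshop uses dedicated concordancing tools, so treat this purely as a conceptual example):

```python
import re

def concordance(text, keyword, context=4):
    """Return keyword-in-context (KWIC) lines: every occurrence of
    `keyword` with up to `context` words on either side."""
    tokens = re.findall(r"\w+", text.lower())
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword.lower():
            left = " ".join(tokens[max(0, i - context):i])
            right = " ".join(tokens[i + 1:i + 1 + context])
            lines.append((left, tok, right))
    return lines

sample = "The cat sat on the mat. The dog chased the cat around the garden."
for left, kw, right in concordance(sample, "cat"):
    print(f"{left:>25} | {kw} | {right}")
```

Sorting the resulting lines by their right-hand context, e.g. `sorted(lines, key=lambda l: l[2])`, mirrors the "sort by following word" feature most KWIC tools offer.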
In this workshop, you will:
Have you ever wondered why, thanks to your fast internet at home, you can stream movies in UltraHD smoothly, but your data uploads are super slow?
Sometimes sharing data with collaborators can be tedious, slow and frustrating.
This workshop addresses some data handling solutions in a research context. The workshop will introduce some research data movement tools, with a hands-on introduction to FileSender and a peek into Globus, a service that enables large-scale data transfers.
By the end of this workshop, you should be able to:
Come along and learn how to make data transfer easy and convenient for you.
Does your computer struggle with your research workload? Would you like to access extra resources and share a computer with collaborators?
Whether it's data analysis, simulation, or other computing work, the ARDC Nectar Research Cloud provides researchers with more computational power. This service gives you access to fast, secure, and powerful cloud computers, helping you to accelerate your research!
This session will feature exemplary cloud computing projects, useful support resources, and a Q&A opportunity. No prior knowledge or experience is required.
This practical workshop is designed for researchers seeking to automate and streamline the entire data lifecycle – from collection to visualisation – using Microsoft Forms and Power BI.
Participants will learn how to:
By the end of this session, attendees will be equipped to automate repetitive data tasks, reduce manual errors, and accelerate the journey from data collection to impactful research insights. No prior experience with Microsoft Forms or Power BI is required.
Learn basic data cleaning techniques in this hands-on workshop, working with structured text data and using open source software OpenRefine.
On completion of this workshop, participants should be able to:
This presentation is designed for researchers, postgrads, and academics who manage their own code but may not have a background in system administration. Participants will learn how to use Git, Ansible, and Docker — three powerful tools that can transform the way they manage, automate, and share their research code.
During the presentation, attendees will be guided step-by-step through using Git for version control, Ansible for automated environment configuration, and Docker for creating consistent, portable execution environments. By the end of the session, participants will understand the fundamentals of Git, automating system setups, and running their code in reproducible containers.
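To give a flavour of the Docker side, a minimal Dockerfile for containerising a research script might look like the sketch below (the file names `requirements.txt` and `analysis.py` are hypothetical placeholders, not session materials):

```dockerfile
# Pin the base image so the Python version is reproducible
FROM python:3.12-slim

WORKDIR /app

# Install pinned dependencies first, so Docker can cache this layer
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the research code (tracked in Git) into the image
COPY analysis.py .

# Default command when the container runs
CMD ["python", "analysis.py"]
```

Built with `docker build -t my-analysis .` and run with `docker run my-analysis`, the same container behaves identically on a laptop, a server, or a collaborator's machine.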
Topics Covered:
A quick R spatial introduction, focusing on key concepts and useful commands for handling and visualising vector data with essential R spatial packages.
(Prior knowledge of R is required. Please only register if you have used R before, or if you took part in this ResBaz's R introduction.)
This workshop will introduce methods for computational analysis of text, particularly core approaches from corpus linguistics (keyword analysis) and data science (topic modelling). Using pre-built computational notebooks from LADAL (https://ladal.edu.au/tools.html) we will introduce the steps and decisions needed to conduct a computational analysis of a textual corpus, and discuss how and where these approaches might fit into your research toolkit.
You will learn how to:
Note that this workshop is best taken following the workshop 'Exploring texts using concordances', but can be attended as a standalone. No experience programming in R is necessary.
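As a rough, standard-library-only sketch of the keyword-analysis idea (the LADAL notebooks use R and more refined statistics), Dunning's log-likelihood compares how often each word occurs in a target corpus versus a reference corpus; the corpora below are tiny illustrative word lists:

```python
import math
from collections import Counter

def keywords(target_tokens, reference_tokens, top=5):
    """Rank words by Dunning's log-likelihood: how much a word's frequency
    in the target corpus deviates from its frequency in the reference."""
    t, r = Counter(target_tokens), Counter(reference_tokens)
    t_total, r_total = sum(t.values()), sum(r.values())
    scores = {}
    for word, a in t.items():
        b = r.get(word, 0)
        # Expected counts if the word were spread evenly over both corpora
        e1 = t_total * (a + b) / (t_total + r_total)
        e2 = r_total * (a + b) / (t_total + r_total)
        ll = 2 * (a * math.log(a / e1) + (b * math.log(b / e2) if b else 0))
        scores[word] = ll
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top]

target = "whale whale sea ship whale sea captain".split()
reference = "house garden sea road house tree sea".split()
print(keywords(target, reference, top=3))  # 'whale' ranks first
```

Words with high scores are the "keywords" of the target corpus: terms that characterise it relative to the reference, which is the starting point for the keyword analysis discussed in the workshop.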
This session will demystify some of the questions you might have about copyright and your research.
What you'll learn about:
Note: this session is designed to provide general copyright guidance, not legal advice.